In statistics and signal processing, an autoregressive (AR) model is a type of random process which is often used to model and predict various types of natural phenomena. The autoregressive model is one of a group of linear prediction formulas that attempt to predict an output of a system based on the previous outputs.
The notation AR(p) indicates an autoregressive model of order p. The AR(p) model is defined as

$$X_t = c + \sum_{i=1}^{p} \varphi_i X_{t-i} + \varepsilon_t$$

where $\varphi_1, \ldots, \varphi_p$ are the parameters of the model, $c$ is a constant (often omitted for simplicity) and $\varepsilon_t$ is white noise.
An autoregressive model can thus be viewed as the output of an all-pole infinite impulse response filter whose input is white noise.
Some constraints are necessary on the values of the parameters of this model in order that the model remains wide-sense stationary. For example, processes in the AR(1) model with $|\varphi_1| \ge 1$ are not stationary. More generally, for an AR(p) model to be wide-sense stationary, the roots of the polynomial $z^p - \sum_{i=1}^{p} \varphi_i z^{p-i}$ must lie within the unit circle, i.e., each root $z_i$ must satisfy $|z_i| < 1$.
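The root condition above is easy to test numerically. The sketch below (assuming NumPy; the helper name `is_stationary` is hypothetical) forms the polynomial $z^p - \varphi_1 z^{p-1} - \cdots - \varphi_p$ and checks that every root lies strictly inside the unit circle:

```python
import numpy as np

def is_stationary(phi):
    """Check wide-sense stationarity of an AR(p) model with coefficients
    phi = [phi_1, ..., phi_p] by testing whether every root of
    z^p - phi_1 z^(p-1) - ... - phi_p lies strictly inside the unit circle."""
    roots = np.roots([1.0] + [-c for c in phi])
    return bool(np.all(np.abs(roots) < 1))

print(is_stationary([0.5]))        # AR(1) with |phi_1| < 1: stationary
print(is_stationary([1.1]))        # AR(1) with |phi_1| >= 1: not stationary
print(is_stationary([0.5, -0.3]))  # an AR(2) with complex roots inside the circle
```

For the AR(1) cases this reproduces the $|\varphi_1| \ge 1$ rule stated above; for higher orders the root test is the general criterion.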
An AR(1)-process is given by:

$$X_t = c + \varphi X_{t-1} + \varepsilon_t$$
where $\varepsilon_t$ is a white noise process with zero mean and variance $\sigma_\varepsilon^2$. (Note: The subscript on $\varphi_1$ has been dropped.) The process is wide-sense stationary if $|\varphi| < 1$, since it is obtained as the output of a stable filter whose input is white noise. (If $\varphi = 1$ then $X_t$ has infinite variance, and is therefore not wide-sense stationary.) Consequently, assuming $|\varphi| < 1$, the mean $\operatorname{E}(X_t)$ is identical for all values of t. If the mean is denoted by $\mu$, it follows from

$$\operatorname{E}(X_t) = \operatorname{E}(c) + \varphi\,\operatorname{E}(X_{t-1}) + \operatorname{E}(\varepsilon_t)$$
that

$$\mu = c + \varphi\mu$$
and hence

$$\mu = \frac{c}{1-\varphi}.$$
In particular, if $c = 0$, then the mean is 0.
The variance is

$$\textrm{var}(X_t) = \operatorname{E}(X_t^2) - \mu^2 = \frac{\sigma_\varepsilon^2}{1-\varphi^2},$$
where $\sigma_\varepsilon$ is the standard deviation of $\varepsilon_t$. This can be shown by noting that

$$\textrm{var}(X_t) = \varphi^2\,\textrm{var}(X_{t-1}) + \sigma_\varepsilon^2,$$
and then by noticing that the quantity above is a stable fixed point of this relation.
The autocovariance is given by

$$B_n = \operatorname{E}(X_{t+n}X_t) - \mu^2 = \frac{\sigma_\varepsilon^2}{1-\varphi^2}\,\varphi^{|n|}.$$
It can be seen that the autocovariance function decays with a decay time (also called time constant) of $\tau = -1/\ln\varphi$ [to see this, write $B_n = K\varphi^{|n|}$ where $K$ is independent of $n$. Then note that $\varphi^{|n|} = e^{|n|\ln\varphi}$ and match this to the exponential decay law $e^{-n/\tau}$].
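The AR(1) moment formulas above are straightforward to verify by simulation. The sketch below (assuming NumPy; the parameter values are arbitrary) compares sample moments of a long simulated path with the stationary mean $c/(1-\varphi)$, variance $\sigma_\varepsilon^2/(1-\varphi^2)$, and lag-1 autocorrelation $\varphi$:

```python
import numpy as np

# Simulate X_t = c + phi*X_{t-1} + eps_t and compare sample moments with
# the stationary values derived above.  Illustrative sketch only.
rng = np.random.default_rng(0)
c, phi, sigma, n = 2.0, 0.5, 1.0, 200_000

eps = rng.normal(0.0, sigma, n)
x = np.empty(n)
x[0] = c / (1 - phi)                     # start at the stationary mean
for t in range(1, n):
    x[t] = c + phi * x[t - 1] + eps[t]

print(x.mean())                          # theory: c/(1-phi) = 4.0
print(x.var())                           # theory: sigma^2/(1-phi^2) ~ 1.333
print(np.corrcoef(x[:-1], x[1:])[0, 1])  # theory: phi = 0.5
```

The sample values approach the theoretical ones as the series length grows, since the process is wide-sense stationary for $|\varphi| < 1$.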
The spectral density function is the Fourier transform of the autocovariance function. In discrete terms this will be the discrete-time Fourier transform:

$$\Phi(\omega) = \frac{1}{\sqrt{2\pi}}\sum_{n=-\infty}^{\infty} B_n e^{-i\omega n} = \frac{1}{\sqrt{2\pi}}\left(\frac{\sigma_\varepsilon^2}{1+\varphi^2-2\varphi\cos\omega}\right).$$
This expression is periodic due to the discrete nature of the $X_j$, which is manifested as the cosine term in the denominator. If we assume that the sampling time ($\Delta t = 1$) is much smaller than the decay time ($\tau$), then we can use a continuum approximation to $B_n$:

$$B(t) \approx \frac{\sigma_\varepsilon^2}{1-\varphi^2}\,\varphi^{|t|},$$
which yields a Lorentzian profile for the spectral density:

$$\Phi(\omega) = \frac{1}{\sqrt{2\pi}}\,\frac{\sigma_\varepsilon^2}{1-\varphi^2}\,\frac{\gamma}{\pi(\gamma^2+\omega^2)},$$
where $\gamma = 1/\tau$ is the angular frequency associated with the decay time $\tau$.
An alternative expression for $X_t$ can be derived by first substituting $c + \varphi X_{t-2} + \varepsilon_{t-1}$ for $X_{t-1}$ in the defining equation. Continuing this process N times yields

$$X_t = c\sum_{k=0}^{N-1}\varphi^k + \varphi^N X_{t-N} + \sum_{k=0}^{N-1}\varphi^k\varepsilon_{t-k}.$$
For N approaching infinity, $\varphi^N$ will approach zero and:

$$X_t = \frac{c}{1-\varphi} + \sum_{k=0}^{\infty}\varphi^k\varepsilon_{t-k}.$$
It is seen that $X_t$ is white noise convolved with the kernel $\varphi^k$ plus the constant mean. If the white noise $\varepsilon_t$ is a Gaussian process then $X_t$ is also a Gaussian process. In other cases, the central limit theorem indicates that $X_t$ will be approximately normally distributed when $\varphi$ is close to one.
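The equivalence between the recursive and the convolution forms can be checked exactly for a process started at its mean, since then the $\varphi^N X_{t-N}$ remainder term vanishes. A short sketch, assuming NumPy:

```python
import numpy as np

# Check X_t = c/(1-phi) + sum_{k=0}^{t-1} phi^k eps_{t-k} against the
# recursive definition, for a process started at its stationary mean.
rng = np.random.default_rng(1)
c, phi, n = 1.0, 0.6, 50
eps = rng.normal(size=n)

# Recursive form
x = np.empty(n)
x[0] = c / (1 - phi)
for t in range(1, n):
    x[t] = c + phi * x[t - 1] + eps[t]

# Convolution form: kernel phi^k applied to the noise history
t = n - 1
kernel = phi ** np.arange(t)            # k = 0, ..., t-1
x_conv = c / (1 - phi) + np.sum(kernel * eps[t:0:-1])
print(np.isclose(x[t], x_conv))         # the two forms agree exactly
```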
There are many ways to estimate the coefficients, such as the ordinary least squares (OLS) procedure, the method of moments (through the Yule-Walker equations), or MCMC.
The AR(p) model is given by the equation

$$X_t = \sum_{i=1}^{p}\varphi_i X_{t-i} + \varepsilon_t.$$
It is based on parameters $\varphi_i$ where i = 1, ..., p. There is a direct correspondence between these parameters and the covariance function of the process, and this correspondence can be inverted to determine the parameters from the autocorrelation function (which is itself obtained from the covariances). This is done using the Yule-Walker equations.
The Yule-Walker equations are the following set of equations:

$$\gamma_m = \sum_{k=1}^{p}\varphi_k\gamma_{m-k} + \sigma_\varepsilon^2\,\delta_{m,0},$$
where m = 0, ..., p, yielding p + 1 equations. Here $\gamma_m$ is the autocovariance function of $X_t$, $\sigma_\varepsilon$ is the standard deviation of the input noise process, and $\delta_{m,0}$ is the Kronecker delta function.
Because the last part of the equation is non-zero only if m = 0, the equations are usually solved by representing them as a matrix equation for m > 0:

$$\begin{bmatrix}\gamma_1\\ \gamma_2\\ \gamma_3\\ \vdots\\ \gamma_p\end{bmatrix} = \begin{bmatrix}\gamma_0 & \gamma_{-1} & \gamma_{-2} & \cdots\\ \gamma_1 & \gamma_0 & \gamma_{-1} & \cdots\\ \gamma_2 & \gamma_1 & \gamma_0 & \cdots\\ \vdots & \vdots & \vdots & \ddots\\ \gamma_{p-1} & \gamma_{p-2} & \gamma_{p-3} & \cdots\end{bmatrix}\begin{bmatrix}\varphi_1\\ \varphi_2\\ \varphi_3\\ \vdots\\ \varphi_p\end{bmatrix}$$
solving for all $\varphi_k$. For m = 0 we have

$$\gamma_0 = \sum_{k=1}^{p}\varphi_k\gamma_{-k} + \sigma_\varepsilon^2,$$
which allows us to solve for $\sigma_\varepsilon^2$.
The above equations (the Yule-Walker equations) provide one route to estimating the parameters of an AR(p) model, by replacing the theoretical covariances with estimated values. One way of specifying the estimated covariances is equivalent to a calculation using least squares regression of values $X_t$ on the p previous values of the same series.
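The estimation route just described can be sketched end to end: simulate an AR(2), form sample autocovariances, solve the m > 0 equations for the coefficients, and recover the noise variance from the m = 0 equation. A sketch assuming NumPy (not a production estimator; the true coefficients are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
phi_true = [0.5, -0.3]
p, n = 2, 200_000

# Simulate the AR(2) process (zero mean, unit-variance white noise)
eps = rng.normal(size=n)
x = np.zeros(n)
for t in range(p, n):
    x[t] = phi_true[0] * x[t - 1] + phi_true[1] * x[t - 2] + eps[t]

# Sample autocovariances gamma_0, ..., gamma_p
x = x - x.mean()
gamma = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])

# m > 0 equations:  [gamma_1 ... gamma_p]^T = R phi,  with R[i, j] = gamma_|i-j|
R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
phi_hat = np.linalg.solve(R, gamma[1:])
sigma2_hat = gamma[0] - phi_hat @ gamma[1:]   # the m = 0 equation

print(phi_hat)       # close to [0.5, -0.3]
print(sigma2_hat)    # close to 1.0
```

The Toeplitz matrix built here is exactly the matrix displayed above, with $\gamma_{-k} = \gamma_k$.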
Another use is the calculation of the first p+1 elements $\rho(0), \ldots, \rho(p)$ of the autocorrelation function. The full autocorrelation function can then be derived by recursively calculating [1]

$$\rho(\tau) = \sum_{k=1}^{p}\varphi_k\rho(\tau-k).$$
The equation defining the AR process is

$$X_t = \sum_{i=1}^{p}\varphi_i X_{t-i} + \varepsilon_t.$$
Multiplying both sides by $X_{t-m}$ and taking expected value yields

$$\operatorname{E}[X_t X_{t-m}] = \operatorname{E}\!\left[\sum_{i=1}^{p}\varphi_i X_{t-i} X_{t-m}\right] + \operatorname{E}[\varepsilon_t X_{t-m}].$$
Now, $\operatorname{E}[X_t X_{t-m}] = \gamma_m$ by definition of the autocovariance function. The values of the noise function are independent of each other, and $X_{t-m}$ is independent of $\varepsilon_t$ when m is greater than zero. For m > 0, $\operatorname{E}[\varepsilon_t X_{t-m}] = 0$. For m = 0,

$$\operatorname{E}[\varepsilon_t X_t] = \operatorname{E}\!\left[\varepsilon_t\left(\sum_{i=1}^{p}\varphi_i X_{t-i} + \varepsilon_t\right)\right] = \sum_{i=1}^{p}\varphi_i\operatorname{E}[\varepsilon_t X_{t-i}] + \operatorname{E}[\varepsilon_t^2] = 0 + \sigma_\varepsilon^2.$$
Now we have, for m ≥ 0,

$$\gamma_m = \operatorname{E}\!\left[\sum_{i=1}^{p}\varphi_i X_{t-i} X_{t-m}\right] + \sigma_\varepsilon^2\,\delta_{m,0}.$$
Furthermore,

$$\operatorname{E}\!\left[\sum_{i=1}^{p}\varphi_i X_{t-i} X_{t-m}\right] = \sum_{i=1}^{p}\varphi_i\operatorname{E}[X_{t-i} X_{t-m}] = \sum_{i=1}^{p}\varphi_i\gamma_{m-i},$$
which yields the Yule-Walker equations:

$$\gamma_m = \sum_{i=1}^{p}\varphi_i\gamma_{m-i} + \sigma_\varepsilon^2\,\delta_{m,0}$$
for m ≥ 0. For m < 0, one uses the symmetry of the autocovariance function:

$$\gamma_m = \gamma_{-m} = \sum_{i=1}^{p}\varphi_i\gamma_{-m-i},$$

which reduces to the case m > 0 above.
The power spectral density of an AR(p) process with noise variance $\mathrm{var}(\varepsilon_t) = \sigma_\varepsilon^2$ is[1]

$$S(f) = \frac{\sigma_\varepsilon^2}{\left|1 - \sum_{k=1}^{p}\varphi_k e^{-i 2\pi f k}\right|^2}.$$
For white noise (AR(0)),

$$S(f) = \sigma_\varepsilon^2.$$
For AR(1),

$$S(f) = \frac{\sigma_\varepsilon^2}{\left|1 - \varphi_1 e^{-2\pi i f}\right|^2} = \frac{\sigma_\varepsilon^2}{1 + \varphi_1^2 - 2\varphi_1\cos 2\pi f}.$$
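The AR(1) closed form can be checked against the general expression by evaluating both over a grid of frequencies. A sketch assuming NumPy (`ar_psd` is a hypothetical helper name):

```python
import numpy as np

# Evaluate the general AR(p) spectral density
#   S(f) = sigma^2 / |1 - sum_k phi_k e^{-i 2 pi f k}|^2
# and check that for p = 1 it reduces to
#   sigma^2 / (1 + phi_1^2 - 2 phi_1 cos(2 pi f)).
def ar_psd(f, phi, sigma2=1.0):
    k = np.arange(1, len(phi) + 1)
    transfer = 1.0 - np.sum(phi * np.exp(-2j * np.pi * f * k))
    return sigma2 / np.abs(transfer) ** 2

phi1 = 0.5
for f in np.linspace(0.0, 0.5, 6):
    closed = 1.0 / (1.0 + phi1**2 - 2 * phi1 * np.cos(2 * np.pi * f))
    assert np.isclose(ar_psd(f, np.array([phi1])), closed)
print("AR(1) closed form matches the general expression")
```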
AR(2) processes can be split into three groups depending on the characteristics of their roots. When $\varphi_1^2 + 4\varphi_2 < 0$, the process has a pair of complex-conjugate roots, creating a mid-frequency peak at

$$f^* = \frac{1}{2\pi}\cos^{-1}\left(\frac{\varphi_1(\varphi_2-1)}{4\varphi_2}\right).$$
Otherwise the process has real roots, and it acts as a low-pass filter on the white noise when $\varphi_1 > 0$ and as a high-pass filter when $\varphi_1 < 0$.
The process is stable when the roots are within the unit circle, or equivalently when the coefficients are in the triangle $-1 \le \varphi_2 \le 1 - |\varphi_1|$.
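The equivalence between the root condition and the triangle condition can be checked numerically over random coefficient pairs. A sketch assuming NumPy (strict inequalities are used, since randomly drawn points land on the boundary with probability zero):

```python
import numpy as np

# For random (phi_1, phi_2), compare the root condition |z| < 1 for
# z^2 - phi_1 z - phi_2 with the triangle condition
# phi_2 > -1 and phi_2 < 1 - |phi_1|.
rng = np.random.default_rng(3)
for _ in range(1000):
    phi1, phi2 = rng.uniform(-2, 2, size=2)
    roots_inside = np.all(np.abs(np.roots([1.0, -phi1, -phi2])) < 1)
    in_triangle = (phi2 > -1) and (phi2 < 1 - abs(phi1))
    assert roots_inside == in_triangle
print("triangle condition matches root condition")
```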
The full PSD function can be expressed in real form as:

$$S(f) = \frac{\sigma_\varepsilon^2}{1 + \varphi_1^2 + \varphi_2^2 - 2\varphi_1(1-\varphi_2)\cos(2\pi f) - 2\varphi_2\cos(4\pi f)}.$$
The autocorrelation function of an AR(p) process can be expressed as

$$\rho(\tau) = \sum_{k=1}^{p} a_k y_k^{-|\tau|},$$

where $y_k$ are the roots of the polynomial

$$\phi(B) = 1 - \sum_{k=1}^{p}\varphi_k B^k.$$
The autocorrelation function of an AR(p) process is thus a sum of decaying exponentials.
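This sum-of-exponentials form can be verified against the Yule-Walker recursion for an AR(2) chosen to have real characteristic roots. A sketch assuming NumPy: the coefficients $a_k$ are fixed by $\rho(0) = 1$ and $\rho(1) = \varphi_1/(1-\varphi_2)$, and the reciprocals of the roots $y_k$ of $\phi(B)$ are the roots $z_k$ of $z^2 - \varphi_1 z - \varphi_2$:

```python
import numpy as np

phi1, phi2 = 0.9, -0.2                   # chosen so the roots are real

# z-plane roots of z^2 - phi1*z - phi2; y_k = 1/z_k are the roots of
# phi(B) = 1 - phi1*B - phi2*B^2, so a_k * y_k^(-tau) = a_k * z_k^tau
z = np.roots([1.0, -phi1, -phi2])

# Solve for a_1, a_2 from rho(0) = 1 and rho(1) = phi1/(1 - phi2)
rho0, rho1 = 1.0, phi1 / (1 - phi2)
a = np.linalg.solve(np.vstack([np.ones(2), z]), np.array([rho0, rho1]))

# Compare the exponential form with the Yule-Walker recursion
rho = [rho0, rho1]
for tau in range(2, 11):
    rho.append(phi1 * rho[-1] + phi2 * rho[-2])
rho_exp = [float(np.sum(a * z ** tau)) for tau in range(11)]
print(np.allclose(rho, rho_exp))         # the two forms agree
```

With complex-conjugate roots the same decomposition holds with complex $a_k$, producing damped oscillations rather than pure decay.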